Conversation

@grypp (Member) commented Aug 7, 2025

This PR adds structured, tooling-friendly optimization remarks to MLIR, with zero cost unless enabled. It introduces:

  • RemarkEngine: collects finalized remarks within the MLIRContext.
  • MLIRRemarkStreamerBase: an abstract class that streams remarks to a backend.
  • Backends: MLIRLLVMRemarkStreamer (bridges to llvm::remarks → YAML/Bitstream) or your own custom streamer.
  • Optional mirroring to the DiagnosticEngine (printAsEmitRemarks + categories).
  • Off by default; no behavior change unless enabled. Thread-safe; ordering is best-effort.

Overview

Passes (reportOptimization*)
         │
         ▼
+-------------------+
|  RemarkEngine     |   collects
+-------------------+
     │         │
     │ mirror  │ stream
     ▼         ▼
emitRemark    MLIRRemarkStreamerBase (abstract)
                   │
                   ├── MLIRLLVMRemarkStreamer → llvm::remarks → YAML | Bitstream
                   └── CustomStreamer → your sink

Enable the remark engine and plug in LLVM's remark streamer

// Enable once per MLIRContext. This uses `MLIRLLVMRemarkStreamer`
mlir::remark::enableOptimizationRemarksToFile(
    ctx, path, llvm::remarks::Format::YAML, cats);

API to emit remarks

// Emit from a pass
remark::passed(loc, categoryVectorizer, myPassname1)
    << "vectorized loop";

remark::missed(loc, categoryUnroll, "MyPass")
    << remark::reason("not profitable at this size")     // creates a structured reason arg
    << remark::suggest("increase unroll factor to >=4"); // creates a structured suggestion arg

remark::passed(loc, categoryVectorizer, myPassname1)
    << "vectorized loop"
    << remark::metric("tripCount", 128);                 // creates a structured metric on the fly

@grypp grypp requested review from Copilot, joker-eph and jpienaar August 7, 2025 10:05
@llvmbot llvmbot added mlir:core MLIR Core Infrastructure mlir labels Aug 7, 2025
@llvmbot (Member) commented Aug 7, 2025

@llvm/pr-subscribers-mlir

@llvm/pr-subscribers-mlir-core

Author: Guray Ozen (grypp)

Changes

This patch hooks LLVM’s remarks infrastructure into MLIR.

  • Implements RemarkEngine. MLIRContext holds one.
  • Disabled by default, zero-cost when off. Enable per context with MLIRContext::setupOptimizationRemarks(...).
  • YAML or bitstream output: via new MLIRRemarkStreamer (subclasses llvm::remarks::RemarkStreamer).
  • Implements simple APIs for passes:

User code can directly print remarks like:

reportOptimizationPass(loc, categoryName, "MyPass")        << "vectorized";
reportOptimizationMiss(loc, categoryLoopUnroll, "MyPass", "try increasing unroll factor") << "...";
reportOptimizationFail(loc, categoryLoopUnroll, "MyPass")  << "...";
reportOptimizationAnalysis(loc, categoryLoopUnroll, "MyPass") << "estimated trip count: " << tripCount;

Patch is 37.47 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/152474.diff

7 Files Affected:

  • (modified) mlir/include/mlir/IR/MLIRContext.h (+15)
  • (added) mlir/include/mlir/IR/Remarks.h (+454)
  • (modified) mlir/lib/IR/CMakeLists.txt (+6)
  • (modified) mlir/lib/IR/MLIRContext.cpp (+35)
  • (added) mlir/lib/IR/Remarks.cpp (+296)
  • (modified) mlir/unittests/IR/CMakeLists.txt (+1)
  • (added) mlir/unittests/IR/RemarkTest.cpp (+106)
diff --git a/mlir/include/mlir/IR/MLIRContext.h b/mlir/include/mlir/IR/MLIRContext.h
index ef8dab87f131a..ab8f4aed3fc79 100644
--- a/mlir/include/mlir/IR/MLIRContext.h
+++ b/mlir/include/mlir/IR/MLIRContext.h
@@ -32,6 +32,7 @@ class InFlightDiagnostic;
 class Location;
 class MLIRContextImpl;
 class RegisteredOperationName;
+class RemarkEngine;
 class StorageUniquer;
 class IRUnit;
 
@@ -212,6 +213,9 @@ class MLIRContext {
   /// Returns the diagnostic engine for this context.
   DiagnosticEngine &getDiagEngine();
 
+  /// Returns the remark engine for this context.
+  std::unique_ptr<RemarkEngine> &getRemarkEngine();
+
   /// Returns the storage uniquer used for creating affine constructs.
   StorageUniquer &getAffineUniquer();
 
@@ -245,6 +249,14 @@ class MLIRContext {
   /// (attributes, operations, types, etc.).
   llvm::hash_code getRegistryHash();
 
+  /// Set up optimization remarks for the context.
+  void setupOptimizationRemarks(
+      StringRef outputPath, StringRef outputFormat, bool printAsEmitRemarks,
+      const std::optional<std::string> &categoryPassName = std::nullopt,
+      const std::optional<std::string> &categoryMissName = std::nullopt,
+      const std::optional<std::string> &categoryAnalysisName = std::nullopt,
+      const std::optional<std::string> &categoryFailedName = std::nullopt);
+
   //===--------------------------------------------------------------------===//
   // Action API
   //===--------------------------------------------------------------------===//
@@ -281,6 +293,9 @@ class MLIRContext {
   }
 
 private:
+  /// Set the remark engine for this context.
+  void setRemarkEngine(std::unique_ptr<RemarkEngine> engine);
+
   /// Return true if the given dialect is currently loading.
   bool isDialectLoading(StringRef dialectNamespace);
 
diff --git a/mlir/include/mlir/IR/Remarks.h b/mlir/include/mlir/IR/Remarks.h
new file mode 100644
index 0000000000000..8f7e7a900e7f1
--- /dev/null
+++ b/mlir/include/mlir/IR/Remarks.h
@@ -0,0 +1,454 @@
+//===- Remarks.h - MLIR Optimization Remark ----------------------*- C++-*-===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+// This file defines utilities for emitting optimization remarks.
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef MLIR_IR_REMARKS_H
+#define MLIR_IR_REMARKS_H
+
+#include "llvm/IR/DiagnosticInfo.h"
+#include "llvm/IR/DiagnosticPrinter.h"
+#include "llvm/Remarks/Remark.h"
+#include "llvm/Remarks/RemarkStreamer.h"
+#include "llvm/Support/ToolOutputFile.h"
+
+#include "mlir/IR/Diagnostics.h"
+#include "mlir/IR/MLIRContext.h"
+
+namespace mlir {
+
+/// Defines different remark kinds that can be used to categorize remarks.
+enum class RemarkKind {
+  OptimizationRemarkUnknown = 0,
+  OptimizationRemarkPass,
+  OptimizationRemarkMissed,
+  OptimizationRemarkFailure,
+  OptimizationRemarkAnalysis,
+};
+
+//===----------------------------------------------------------------------===//
+// Remark Base Class
+//===----------------------------------------------------------------------===//
+class RemarkBase : public llvm::DiagnosticInfo {
+
+public:
+  RemarkBase(RemarkKind remarkKind, DiagnosticSeverity severity,
+             const char *passName, StringRef remarkName, Location loc,
+             std::optional<StringRef> functionName = std::nullopt)
+      : llvm::DiagnosticInfo(makeLLVMKind(remarkKind),
+                             makeLLVMSeverity(severity)),
+        remarkKind(remarkKind), functionName(functionName), loc(loc),
+        passName(passName), remarkName(remarkName) {}
+
+  struct SetIsVerbose {};
+
+  struct SetExtraArgs {};
+
+  struct RemarkKeyValue {
+    std::string key;
+    std::string val;
+
+    explicit RemarkKeyValue(StringRef str = "") : key("String"), val(str) {}
+    RemarkKeyValue(StringRef key, Value value);
+    RemarkKeyValue(StringRef key, Type type);
+    RemarkKeyValue(StringRef key, StringRef s);
+    RemarkKeyValue(StringRef key, const char *s)
+        : RemarkKeyValue(key, StringRef(s)) {};
+    RemarkKeyValue(StringRef key, int n);
+    RemarkKeyValue(StringRef key, float n);
+    RemarkKeyValue(StringRef key, long n);
+    RemarkKeyValue(StringRef key, long long n);
+    RemarkKeyValue(StringRef key, unsigned n);
+    RemarkKeyValue(StringRef key, unsigned long n);
+    RemarkKeyValue(StringRef key, unsigned long long n);
+    RemarkKeyValue(StringRef key, bool b)
+        : key(key), val(b ? "true" : "false") {}
+  };
+
+  void insert(StringRef s);
+  void insert(RemarkKeyValue a);
+  void insert(SetIsVerbose v);
+  void insert(SetExtraArgs ea);
+
+  void print(llvm::DiagnosticPrinter &dp) const override;
+  void print() const;
+
+  virtual bool isEnabled() const = 0;
+  Location getLocation() const { return loc; }
+  llvm::remarks::RemarkLocation getRemarkLocation() const;
+  /// Diagnostic -> Remark
+  llvm::remarks::Remark generateRemark() const;
+
+  StringRef getFunction() const {
+    if (functionName)
+      return *functionName;
+    return "<unknown function>";
+  }
+  StringRef getPassName() const { return passName; }
+  StringRef getRemarkName() const { return remarkName; }
+  std::string getMsg() const;
+
+  bool isVerbose() const { return isVerboseRemark; }
+
+  ArrayRef<RemarkKeyValue> getArgs() const { return args; }
+
+  llvm::remarks::Type getRemarkType() const;
+
+protected:
+  /// Keeps the MLIR diagnostic kind, which is used to determine the
+  /// diagnostic kind in the LLVM remark streamer.
+  RemarkKind remarkKind;
+  /// Name of the covering function-like interface.
+  std::optional<std::string> functionName;
+
+  Location loc;
+  /// Name of the pass that triggers this report.
+  const char *passName;
+
+  /// Textual identifier for the remark (single-word, CamelCase). Can be used
+  /// by external tools reading the output file for optimization remarks to
+  /// identify the remark.
+  StringRef remarkName;
+
+  /// RemarkKeyValues collected via the streaming interface.
+  SmallVector<RemarkKeyValue, 4> args;
+
+  /// The remark is expected to be noisy.
+  bool isVerboseRemark = false;
+
+private:
+  /// Convert the MLIR diagnostic severity to LLVM diagnostic severity.
+  static llvm::DiagnosticSeverity
+  makeLLVMSeverity(DiagnosticSeverity severity) {
+    switch (severity) {
+    case DiagnosticSeverity::Note:
+      return llvm::DiagnosticSeverity::DS_Note;
+    case DiagnosticSeverity::Warning:
+      return llvm::DiagnosticSeverity::DS_Warning;
+    case DiagnosticSeverity::Error:
+      return llvm::DiagnosticSeverity::DS_Error;
+    case DiagnosticSeverity::Remark:
+      return llvm::DiagnosticSeverity::DS_Remark;
+    }
+    llvm_unreachable("Unknown diagnostic severity");
+  }
+  /// Convert the MLIR remark kind to LLVM diagnostic kind.
+  static llvm::DiagnosticKind makeLLVMKind(RemarkKind remarkKind) {
+    switch (remarkKind) {
+    case RemarkKind::OptimizationRemarkUnknown:
+      return llvm::DiagnosticKind::DK_Generic;
+    case RemarkKind::OptimizationRemarkPass:
+      return llvm::DiagnosticKind::DK_OptimizationRemark;
+    case RemarkKind::OptimizationRemarkMissed:
+      return llvm::DiagnosticKind::DK_OptimizationRemarkMissed;
+    case RemarkKind::OptimizationRemarkFailure:
+      return llvm::DiagnosticKind::DK_OptimizationFailure;
+    case RemarkKind::OptimizationRemarkAnalysis:
+      return llvm::DiagnosticKind::DK_OptimizationRemarkAnalysis;
+    }
+    llvm_unreachable("Unknown diagnostic kind");
+  }
+};
+
+// clang-format off
+template <class RemarkT>
+decltype(auto) operator<<(
+    RemarkT &&r,
+    std::enable_if_t<std::is_base_of_v<RemarkBase,
+                                       std::remove_reference_t<RemarkT>>,
+                     StringRef>
+        s) {
+  r.insert(s);
+  return std::forward<RemarkT>(r);
+}
+
+template <class RemarkT>
+decltype(auto) operator<<(
+    RemarkT &&r,
+    std::enable_if_t<std::is_base_of_v<RemarkBase,
+                                       std::remove_reference_t<RemarkT>>,
+                     RemarkBase::RemarkKeyValue>
+        a) {
+  r.insert(std::move(a));
+  return std::forward<RemarkT>(r);
+}
+
+template <class RemarkT>
+decltype(auto) operator<<(
+    RemarkT &&r,
+    std::enable_if_t<std::is_base_of_v<RemarkBase,
+                                       std::remove_reference_t<RemarkT>>,
+                     RemarkBase::SetIsVerbose>
+        v) {
+  r.insert(v);
+  return std::forward<RemarkT>(r);
+}
+
+template <class RemarkT>
+decltype(auto) operator<<(
+    RemarkT &&r,
+    std::enable_if_t<std::is_base_of_v<RemarkBase,
+                                       std::remove_reference_t<RemarkT>>,
+                     RemarkBase::SetExtraArgs>
+        ea) {
+  r.insert(ea);
+  return std::forward<RemarkT>(r);
+}
+// clang-format on
+
+//===----------------------------------------------------------------------===//
+// Shorthand aliases for different kinds of remarks.
+//===----------------------------------------------------------------------===//
+
+template <RemarkKind K, DiagnosticSeverity S>
+class OptRemarkBase final : public RemarkBase {
+public:
+  explicit OptRemarkBase(Location loc, StringRef passName, StringRef remarkName)
+      : RemarkBase(K, S, passName.data(), remarkName, loc) {}
+
+  bool isEnabled() const override { return true; }
+};
+
+using OptRemarkAnalysis = OptRemarkBase<RemarkKind::OptimizationRemarkAnalysis,
+                                        DiagnosticSeverity::Remark>;
+
+using OptRemarkPass = OptRemarkBase<RemarkKind::OptimizationRemarkPass,
+                                    DiagnosticSeverity::Remark>;
+
+using OptRemarkMissed = OptRemarkBase<RemarkKind::OptimizationRemarkMissed,
+                                      DiagnosticSeverity::Remark>;
+
+using OptRemarkFailure = OptRemarkBase<RemarkKind::OptimizationRemarkFailure,
+                                       DiagnosticSeverity::Remark>;
+
+class RemarkEngine;
+
+//===----------------------------------------------------------------------===//
+// InFlightRemark
+//===----------------------------------------------------------------------===//
+
+/// InFlightRemark is a RAII class that holds a reference to a RemarkBase
+/// instance and allows building the remark using the << operator. The remark
+/// is emitted when the InFlightRemark instance is destroyed, which happens
+/// when the scope ends or when the InFlightRemark instance is moved.
+/// Similar to InFlightDiagnostic, but for remarks.
+class InFlightRemark {
+public:
+  explicit InFlightRemark(RemarkBase *diag) : remark(diag) {}
+  InFlightRemark(RemarkEngine &eng, RemarkBase *diag)
+      : owner(&eng), remark(diag) {}
+
+  InFlightRemark() = default; // empty ctor
+
+  template <typename T>
+  InFlightRemark &operator<<(T &&arg) {
+    if (remark)
+      *remark << std::forward<T>(arg);
+    return *this;
+  }
+
+  explicit operator bool() const { return remark != nullptr; }
+
+  ~InFlightRemark();
+
+  InFlightRemark(const InFlightRemark &) = delete;
+  InFlightRemark &operator=(const InFlightRemark &) = delete;
+  InFlightRemark(InFlightRemark &&) = default;
+  InFlightRemark &operator=(InFlightRemark &&) = default;
+
+private:
+  RemarkEngine *owner{nullptr};
+  std::unique_ptr<RemarkBase> remark;
+};
+
+//===----------------------------------------------------------------------===//
+// MLIR Remark Streamer
+//===----------------------------------------------------------------------===//
+
+class MLIRRemarkStreamer {
+  llvm::remarks::RemarkStreamer &remarkStreamer;
+
+public:
+  explicit MLIRRemarkStreamer(llvm::remarks::RemarkStreamer &remarkStreamer)
+      : remarkStreamer(remarkStreamer) {}
+
+  void streamOptimizationRemark(const RemarkBase &remark);
+};
+
+//===----------------------------------------------------------------------===//
+// Remark Engine (MLIR Context will own this class)
+//===----------------------------------------------------------------------===//
+
+class RemarkEngine {
+private:
+  /// Filter for missed optimization remarks.
+  std::optional<llvm::Regex> missFilter;
+  /// Filter for passed optimization remarks.
+  std::optional<llvm::Regex> passFilter;
+  /// Filter for analysis remarks.
+  std::optional<llvm::Regex> analysisFilter;
+  /// Filter for failed optimization remarks.
+  std::optional<llvm::Regex> failedFilter;
+  /// The output file for the remarks.
+  std::unique_ptr<llvm::ToolOutputFile> remarksFile;
+  /// The MLIR remark streamer that will be used to emit the remarks.
+  std::unique_ptr<MLIRRemarkStreamer> remarkStreamer;
+  std::unique_ptr<llvm::remarks::RemarkStreamer> llvmRemarkStreamer;
+  /// When enabled, the engine also prints remarks as mlir::emitRemarks.
+  bool printAsEmitRemarks;
+  /// The main MLIR remark streamer that will be used to emit the remarks.
+  MLIRRemarkStreamer *getLLVMRemarkStreamer() { return remarkStreamer.get(); }
+  const MLIRRemarkStreamer *getLLVMRemarkStreamer() const {
+    return remarkStreamer.get();
+  }
+  void
+  setLLVMRemarkStreamer(std::unique_ptr<MLIRRemarkStreamer> remarkStreamer) {
+    this->remarkStreamer = std::move(remarkStreamer);
+  }
+
+  /// Get the main MLIR remark streamer that will be used to emit the remarks.
+  llvm::remarks::RemarkStreamer *getMainRemarkStreamer() {
+    return llvmRemarkStreamer.get();
+  }
+  const llvm::remarks::RemarkStreamer *getMainRemarkStreamer() const {
+    return llvmRemarkStreamer.get();
+  }
+  /// Set the main remark streamer to be used by the engine.
+  void setMainRemarkStreamer(
+      std::unique_ptr<llvm::remarks::RemarkStreamer> mainRemarkStreamer) {
+    llvmRemarkStreamer = std::move(mainRemarkStreamer);
+  }
+
+  /// Return true if missed optimization remarks are enabled, override
+  /// to provide different implementation.
+  bool isMissedOptRemarkEnabled(StringRef categoryName) const;
+
+  /// Return true if passed optimization remarks are enabled, override
+  /// to provide different implementation.
+  bool isPassedOptRemarkEnabled(StringRef categoryName) const;
+
+  /// Return true if analysis optimization remarks are enabled, override
+  /// to provide different implementation.
+  bool isAnalysisOptRemarkEnabled(StringRef categoryName) const;
+
+  /// Return true if failed optimization remarks are enabled, override
+  /// to provide different implementation.
+  bool isFailedOptRemarkEnabled(StringRef categoryName) const;
+
+  /// Return true if any type of remarks are enabled for this pass.
+  bool isAnyRemarkEnabled(StringRef categoryName) const {
+    return (isMissedOptRemarkEnabled(categoryName) ||
+            isPassedOptRemarkEnabled(categoryName) ||
+            isFailedOptRemarkEnabled(categoryName) ||
+            isAnalysisOptRemarkEnabled(categoryName));
+  }
+
+  /// Emit a remark using the given maker function, which should return
+  /// a RemarkBase instance. The remark will be emitted using the main
+  /// remark streamer.
+  template <typename RemarkT, typename... Args>
+  InFlightRemark makeRemark(Args &&...args);
+
+  template <typename RemarkT>
+  InFlightRemark
+  emitIfEnabled(Location loc, StringRef passName, StringRef category,
+                bool (RemarkEngine::*isEnabled)(StringRef) const);
+
+public:
+  RemarkEngine() = delete;
+  /// Constructs Remark engine with optional category names
+  RemarkEngine(bool printAsEmitRemarks,
+               std::optional<std::string> categoryPassName = std::nullopt,
+               std::optional<std::string> categoryMissName = std::nullopt,
+               std::optional<std::string> categoryAnalysisName = std::nullopt,
+               std::optional<std::string> categoryFailedName = std::nullopt);
+
+  /// Destructor that will close the output file and reset the
+  /// main remark streamer.
+  ~RemarkEngine();
+
+  /// Setup the remark engine with the given output path and format.
+  llvm::Error initialize(StringRef outputPath, StringRef outputFormat);
+
+  /// Report a diagnostic remark.
+  void report(const RemarkBase &&diag);
+
+  /// Report a successful remark, this will create an InFlightRemark
+  /// that can be used to build the remark using the << operator.
+  InFlightRemark emitOptimizationRemark(Location loc, StringRef passName,
+                                        StringRef category);
+  /// Report a missed optimization remark
+  /// that can be used to build the remark using the << operator.
+  InFlightRemark emitOptimizationRemarkMiss(Location loc, StringRef passName,
+                                            StringRef category);
+  /// Report a failed optimization remark, this will create an InFlightRemark
+  /// that can be used to build the remark using the << operator.
+  InFlightRemark emitOptimizationRemarkFailure(Location loc, StringRef passName,
+                                               StringRef category);
+  /// Report an analysis remark, this will create an InFlightRemark
+  /// that can be used to build the remark using the << operator.
+  InFlightRemark emitOptimizationRemarkAnalysis(Location loc,
+                                                StringRef passName,
+                                                StringRef category);
+};
+
+//===----------------------------------------------------------------------===//
+// Emitters
+//===----------------------------------------------------------------------===//
+
+using Suggestion = RemarkBase::RemarkKeyValue;
+inline Suggestion suggest(StringRef txt) { return {"Suggestion", txt}; }
+
+template <typename Fn, typename... Args>
+[[nodiscard]] inline InFlightRemark withEngine(Fn fn, Location loc,
+                                               Args &&...args) {
+  MLIRContext *ctx = loc->getContext();
+
+  auto &enginePtr = ctx->getRemarkEngine();
+
+  if (RemarkEngine *eng = enginePtr.get())
+    return (eng->*fn)(loc, std::forward<Args>(args)...);
+
+  return {};
+}
+
+/// Report an optimization remark that was passed.
+inline InFlightRemark reportOptimizationPass(Location loc, StringRef cat,
+                                             StringRef passName) {
+  return withEngine(&RemarkEngine::emitOptimizationRemark, loc, passName, cat);
+}
+
+/// Report an optimization remark that was missed.
+inline InFlightRemark reportOptimizationMiss(Location loc, StringRef cat,
+                                             StringRef passName,
+                                             StringRef suggestion) {
+  auto r =
+      withEngine(&RemarkEngine::emitOptimizationRemarkMiss, loc, passName, cat);
+  if (r)
+    r << suggest(suggestion);
+  return r;
+}
+/// Report an optimization failure remark.
+inline InFlightRemark reportOptimizationFail(Location loc, StringRef cat,
+                                             StringRef passName) {
+  return withEngine(&RemarkEngine::emitOptimizationRemarkFailure, loc, passName,
+                    cat);
+}
+
+/// Report an optimization analysis remark.
+inline InFlightRemark reportOptimizationAnalysis(Location loc, StringRef cat,
+                                                 StringRef passName) {
+  return withEngine(&RemarkEngine::emitOptimizationRemarkAnalysis, loc,
+                    passName, cat);
+}
+
+} // namespace mlir
+
+#endif // MLIR_IR_REMARKS_H
\ No newline at end of file
diff --git a/mlir/lib/IR/CMakeLists.txt b/mlir/lib/IR/CMakeLists.txt
index 3ef69cea18f0a..438e486f2ad4c 100644
--- a/mlir/lib/IR/CMakeLists.txt
+++ b/mlir/lib/IR/CMakeLists.txt
@@ -33,6 +33,7 @@ add_mlir_library(MLIRIR
   PatternMatch.cpp
   Region.cpp
   RegionKindInterface.cpp
+  Remarks.cpp
   SymbolTable.cpp
   TensorEncoding.cpp
   Types.cpp
@@ -69,5 +70,10 @@ add_mlir_library(MLIRIR
 
   LINK_LIBS PUBLIC
   MLIRSupport
+  
+  LINK_COMPONENTS
+    Remarks
+    Core
+    BitstreamReader  
   )
 
diff --git a/mlir/lib/IR/MLIRContext.cpp b/mlir/lib/IR/MLIRContext.cpp
index 2d5381d43f863..5a37de1481eb9 100644
--- a/mlir/lib/IR/MLIRContext.cpp
+++ b/mlir/lib/IR/MLIRContext.cpp
@@ -25,12 +25,14 @@
 #include "mlir/IR/Location.h"
 #include "mlir/IR/OpImplementation.h"
 #include "mlir/IR/OperationSupport.h"
+#include "mlir/IR/Remarks.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/Twine.h"
 #include "llvm/Support/Allocator.h"
 #inclu...
[truncated]

This comment was marked as outdated.

@grypp grypp requested a review from razvanlupusoru August 7, 2025 11:20
@joker-eph (Collaborator) left a comment

Thanks!

Some quick comments inline.

Can you write a doc in the mlir/docs folder?
I suspect it may be suitable as a new section in mlir/docs/Diagnostics.md (or linked from there otherwise).

@j2kun (Contributor) commented Aug 8, 2025

Can you write a doc in the mlir/docs folder?

+1 I am not familiar with the LLVM remark infrastructure and I am curious what I could use this for in MLIR.

grypp added 4 commits August 8, 2025 07:30
This patch hooks LLVM’s remarks infrastructure into MLIR.

* Disabled by default, zero-cost when off.
  Enable per context with `MLIRContext::setupOptimizationRemarks(...)`.
* YAML or bitstream output: via new `MLIRRemarkStreamer`
  (subclasses `llvm::remarks::RemarkStreamer`).
* Implements simple APIs for passes:

User code can directly print remarks like:
```
reportOptimizationPass(loc, categoryName, "MyPass")        << "vectorized";
reportOptimizationMiss(loc, categoryLoopUnroll, "MyPass",
                       "try increasing unroll factor")      << "...";
reportOptimizationFail(loc, categoryLoopUnroll, "MyPass")  << "...";
reportOptimizationAnalysis(loc, categoryLoopUnroll, "MyPass")
                       << "estimated trip count: " << tripCount;
```
@grypp (Member, Author) commented Aug 11, 2025

Can you write a doc in the mlir/docs folder?

+1 I am not familiar with the LLVM remark infrastructure and I am curious what I could use this for in MLIR.

I added a doc.

@grypp grypp changed the title [MLIR] Integrate LLVM Optimization Remarks Infrastructure [MLIR] Introduce RemarkEngine + pluggable remark streaming (YAML/Bitstream) Aug 11, 2025
@grypp (Member, Author) commented Aug 11, 2025

I've updated the PR. I addressed the concerns.
@joker-eph @River707

@joker-eph (Collaborator) left a comment

(apparently I had unflushed comment, they may or may not be relevant, but flushing now)

@razvanlupusoru (Contributor) left a comment

Thank you Guray for your contribution! The latest design with the decoupled RemarkEngine and Streamer looks great.

From an implementation perspective, my only concern is that MLIR Diagnostics already supports "Remarks" as a type of message that can be emitted, which might confuse a compiler developer trying to emit such messages. So River's concerns about framework duplication seem valid. However, one way the separation makes sense is if the requirements and documentation of the Remarks framework make it clear that these are user-facing messages. Thus, a set of guidelines around messages would really benefit this framework, and I would be happy to help contribute this.

Most of my comments are less about the implementation and more about this API in MLIR. This API should be easy for a compiler developer to use without ambiguities in reporting, and it should avoid words like "pass" and "optimization," which suggest only a subset of transformations that an MLIR-based compiler can handle. My suggestions are not about adding functionality but about removing unnecessary elements and clarifying the wording.

namespace mlir::remark {

/// Defines different remark kinds that can be used to categorize remarks.
enum class RemarkKind {
Contributor comment:

Can you please document what each of these kinds means?

Member comment:

What's happening to "unknown" remarks?

Member (Author) reply:

I put the unknown kind in as good practice, to avoid UB in case the kind is forgotten to be set.

But we don't have a way to generate unknown remarks today.

@joker-eph joker-eph requested a review from Copilot August 12, 2025 12:14
@grypp (Member, Author) commented Aug 15, 2025

@joker-eph let me know if it's all good.

ping @razvanlupusoru @River707

@razvanlupusoru (Contributor) left a comment

Thank you Guray. I marked my requests for fewer remark kinds as resolved yesterday, given your justifications. I have just a few remaining small requests, but otherwise this looks good to me. I am traveling, so I won't look at this again for at least 1.5 weeks; no need to wait for me as long as you get a +1 from the other reviewers.

@grypp (Member, Author) commented Aug 20, 2025

ping reviewers @River707 @jpienaar @joker-eph

@joker-eph (Collaborator) left a comment

The MLIRContext changes are minimal, and everything seems well decoupled from a dependency point of view, so that all LGTM.

There are just a few comment threads that are still open at the moment.

@joker-eph (Collaborator) left a comment

I don't see anything left open right now, LGTM.

It would still be great to get an approval other than mine.


@joker-eph (Collaborator) commented

Could we plug this into mlir-opt with an option to enable the remarks as diagnostics? (and add a test pass that emit these)

@grypp (Member, Author) commented Aug 21, 2025

Could we plug this into mlir-opt with an option to enable the remarks as diagnostics? (and add a test pass that emit these)

Yes, I thought it's better to do that in a follow-up PR so it's easy to read.

@joker-eph (Collaborator) commented

Lg

@grypp grypp merged commit 3d41197 into llvm:main Aug 21, 2025
9 checks passed
rupprecht added a commit to rupprecht/llvm-project that referenced this pull request Aug 21, 2025
rupprecht added a commit that referenced this pull request Aug 21, 2025
@aemerson (Contributor) commented

Aren't LLVM docs supposed to be in RST format, not Markdown? I know LLMs prefer Markdown, but the Sphinx doc generation uses RST.

@joker-eph (Collaborator) commented

This is MLIR: everything is markdown

@aemerson (Contributor) commented

This is MLIR: everything is markdown

Should we move all of LLVM docs to markdown too? It's a bit odd to have two different documentation formats in llvm-project. I'd prefer .md too for simplicity across the board.

@joker-eph (Collaborator) commented

I think that's a different topic, and this is probably not the right place to discuss it. An RFC on Discourse would be better suited, and I suspect such a change would need to be motivated on its own. Saying we should convert all of the LLVM docs just because MLIR uses Markdown seems a bit weak considering the effort, and there are so many things to improve on the LLVM website and docs already that this may not be where we need to invest effort right now.
